
Yihong Dong

Do Transformers Have the Ability for Periodicity Generalization?
Jan 30, 2026

KOCO-BENCH: Can Large Language Models Leverage Domain Knowledge in Software Development?
Jan 19, 2026

EvoCoT: Overcoming the Exploration Bottleneck in Reinforcement Learning
Aug 11, 2025

A Survey on Code Generation with LLM-based Agents
Jul 31, 2025

RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization
Jul 31, 2025

SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning
May 22, 2025

Rethinking Repetition Problems of LLMs in Code Generation
May 15, 2025

Thinking Longer, Not Larger: Enhancing Software Engineering Agents via Scaling Test-Time Compute
Mar 31, 2025

FANformer: Improving Large Language Models Through Effective Periodicity Modeling
Feb 28, 2025

Why language models collapse when trained on recursively generated text
Dec 19, 2024